In this work, we propose a new biophysically based computational model of Parkinson's disease built from local field potential data collected from the brains of marmoset monkeys. Parkinson's disease is a neurodegenerative disorder associated with the death of dopaminergic neurons in the substantia nigra pars compacta, which disrupts the normal dynamics of the basal ganglia-thalamus-cortex neuronal circuit of the brain. Although several disease mechanisms have been described, a complete characterization of these mechanisms and of the molecular pathogenesis is still missing, and there is still no cure. To address this gap, computational models that resemble the neurobiological aspects found in animal models have been proposed. In our model, we follow a data-driven approach in which a set of biologically constrained parameters is optimized using differential evolution. The evolved models successfully resemble the single-neuron mean firing rates and the spectral signatures of local field potentials from healthy and parkinsonian marmoset brain data. To the best of our knowledge, this is the first computational model of Parkinson's disease based on simultaneous electrophysiological recordings from seven brain regions of marmoset monkeys. The results indicate that the proposed model can facilitate the investigation of PD mechanisms and support the development of techniques that may point to new therapies. It can also be applied to other computational neuroscience problems in which biological data can be used to fit large-scale models of brain circuits.
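The data-driven fitting loop described above can be illustrated with a minimal sketch: differential evolution searches a bounded, biologically constrained parameter space to minimize the mismatch between simulated and recorded features. The target values, the toy `simulate` function, and the parameter names below are placeholders, not the paper's actual model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical target features extracted from recordings: mean firing
# rates (Hz) per region and the peak frequency (Hz) of the LFP spectrum.
TARGET_RATES = np.array([12.0, 45.0, 8.0])   # placeholder values
TARGET_PEAK_HZ = 15.0                         # e.g., a beta-band peak

def simulate(params):
    """Stand-in for the biophysical network simulation.

    A real model would integrate neuron and synapse equations; here we
    map parameters to summary features with a toy function so the
    optimization loop is runnable end to end.
    """
    g_exc, g_inh, noise = params
    rates = np.array([10 * g_exc, 50 * g_exc - 5 * g_inh, 10 * noise])
    peak_hz = 20 * g_inh / (g_exc + 1e-9)
    return rates, peak_hz

def loss(params):
    # Weighted squared error between simulated and target features.
    rates, peak_hz = simulate(params)
    return np.sum((rates - TARGET_RATES) ** 2) + (peak_hz - TARGET_PEAK_HZ) ** 2

# Biologically constrained search ranges for each free parameter.
bounds = [(0.1, 5.0),   # excitatory conductance scale
          (0.1, 5.0),   # inhibitory conductance scale
          (0.0, 2.0)]   # background noise level

result = differential_evolution(loss, bounds, seed=0, maxiter=50, tol=1e-6)
print("best parameters:", result.x, "loss:", result.fun)
```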
A digital twin is defined as a virtual representation of a physical asset, enabled through data and simulators, for real-time prediction, optimization, monitoring, control, and improved decision-making. Unfortunately, the term remains vague and says little about a twin's actual capability. Recently, the concept of capability levels has been introduced to address this issue. It states that a digital twin can be categorized on a scale from zero to five according to its capability, the levels being referred to as standalone, descriptive, diagnostic, predictive, prescriptive, and autonomous, respectively. The current work introduces the concept in the context of the built environment and demonstrates it using a modern house as a use case. The house is equipped with an array of sensors that collect time-series data on the internal state of the house. Together with physics-based and data-driven models, these data are used to develop digital twins at different capability levels, demonstrated in virtual reality. In addition to presenting a blueprint for developing digital twins, the work also provides future research directions to enhance the technology.
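The zero-to-five capability scale can be encoded directly; the sketch below is a minimal illustration of the categorization named in the abstract, not code from the work.

```python
from enum import IntEnum

class CapabilityLevel(IntEnum):
    """Digital-twin capability scale (0-5) as named in the abstract."""
    STANDALONE = 0    # disconnected virtual model
    DESCRIPTIVE = 1   # mirrors live sensor state
    DIAGNOSTIC = 2    # explains why the asset behaves as it does
    PREDICTIVE = 3    # forecasts future states
    PRESCRIPTIVE = 4  # recommends actions
    AUTONOMOUS = 5    # acts on the asset without human input

def describe(level: CapabilityLevel) -> str:
    return f"Level {int(level)}: {level.name.lower()}"

print(describe(CapabilityLevel.PREDICTIVE))  # Level 3: predictive
```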
The concept of walkable urban development has gained increased attention due to its public health, economic, and environmental sustainability benefits. Unfortunately, land zoning and historic under-investment have resulted in spatial inequality in walkability and social inequality among residents. We tackle the problem of Walkability Optimization through the lens of combinatorial optimization. The task is to select locations in which additional amenities (e.g., grocery stores, schools, restaurants) can be allocated to improve residents' access on foot, while taking into account existing amenities and providing multiple options (e.g., for restaurants). To this end, we derive Mixed-Integer Linear Programming (MILP) and Constraint Programming (CP) models. Moreover, we show that the problem's objective function is submodular in special cases, which motivates an efficient greedy heuristic. We conduct a case study on 31 underserved neighbourhoods in the City of Toronto, Canada. MILP finds the best solutions in most scenarios but does not scale well with network size. The greedy algorithm scales well and finds near-optimal solutions. Our empirical evaluation shows that neighbourhoods with low walkability have great potential for transformation into pedestrian-friendly neighbourhoods by strategically placing new amenities. Allocating 3 additional grocery stores, schools, and restaurants can improve the "WalkScore" by more than 50 points (on a scale of 100) for 4 neighbourhoods and reduce the walking distances to amenities to 10 minutes, for all amenity types, for 75% of all residential locations. Our code and paper appendix are available at https://github.com/khalil-research/walkability.
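The greedy heuristic motivated by submodularity can be sketched as follows: repeatedly place the candidate amenity that yields the largest marginal reduction in residents' walking distance to their nearest amenity. The distance matrix, the fixed-cap initialization, and the single-amenity-type objective below are simplifications of the paper's multi-option, multi-type formulation.

```python
import numpy as np

def greedy_allocate(dist, k):
    """Greedily pick k candidate sites to minimize residents' distance
    to their nearest amenity.

    dist: (n_residents, n_candidates) walking-distance matrix in meters.
    Returns indices of chosen candidate locations.
    """
    n_res, n_cand = dist.shape
    # Distance to the nearest existing amenity; here a flat walking-range
    # cap is used as a placeholder.
    best = np.full(n_res, 3000.0)
    chosen = []
    for _ in range(k):
        # Marginal gain of each unchosen site: total walking distance saved.
        gains = [np.maximum(best - dist[:, j], 0).sum()
                 if j not in chosen else -1.0
                 for j in range(n_cand)]
        j_star = int(np.argmax(gains))
        chosen.append(j_star)
        best = np.minimum(best, dist[:, j_star])
    return chosen

# Toy instance: 5 residential locations, 4 candidate sites.
rng = np.random.default_rng(0)
dist = rng.uniform(100, 2000, size=(5, 4))
print(greedy_allocate(dist, k=2))
```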
We examined multiple deep neural network (DNN) architectures for their suitability in predicting neurotransmitter concentrations from labeled in vitro fast-scan cyclic voltammetry (FSCV) data collected on carbon-fiber electrodes. Suitability is determined by the predictive performance in the "out-of-probe" case, the response to artificially induced electrical noise, and the ability to predict when the model will be errant for a given probe. This work extends prior comparisons of time-series classification models by focusing on this specific task, and it extends previous applications of machine learning to FSCV by using a much larger data set and by incorporating recent advances in deep neural networks. The InceptionTime architecture, a deep convolutional neural network, had the best absolute predictive performance of the models tested but was more susceptible to noise. A naive multilayer perceptron architecture had the second-lowest prediction error and was less affected by the artificial noise, suggesting that convolutions may not be as important for this task as one might suspect.
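A plain MLP baseline of the kind the abstract describes might look like the sketch below, which trains scikit-learn's MLPRegressor on synthetic stand-in sweeps; the sweep length, architecture, and data generator are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for FSCV sweeps: each sample is one voltammogram
# (current vs. applied voltage) and the label is a concentration (nM).
rng = np.random.default_rng(0)
n_samples, sweep_len = 2000, 1000
concentration = rng.uniform(0, 1000, n_samples)           # nM
sweeps = (concentration[:, None] / 1000.0
          * np.sin(np.linspace(0, np.pi, sweep_len))      # fake redox peak
          + rng.normal(0, 0.05, (n_samples, sweep_len)))  # electrical noise

X_tr, X_te, y_tr, y_te = train_test_split(sweeps, concentration, random_state=0)

# A deliberately plain MLP, mirroring the abstract's observation that a
# naive multilayer perceptron is competitive on this task.
mlp = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=200, random_state=0)
mlp.fit(X_tr, y_tr)
print("test R^2:", mlp.score(X_te, y_te))
```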
We study the algorithm configuration (AC) problem, in which one seeks to find an optimal parameter configuration of a given target algorithm in an automated way. Recently, there has been significant progress in designing AC approaches that satisfy strong theoretical guarantees. However, a significant gap still remains between the practical performance of these approaches and state-of-the-art heuristic methods. To close this gap, we introduce AC-Band, a general approach for the AC problem based on multi-armed bandits that provides theoretical guarantees while exhibiting strong practical performance. We show that AC-Band requires significantly less computation time than other AC approaches with theoretical guarantees while still yielding high-quality configurations.
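While the abstract does not detail AC-Band's internals, bandit-style configuration can be illustrated with a related elimination scheme, successive halving, which spends little evaluation budget on weak configurations and progressively more on survivors. Everything below (the target-algorithm stub, the parameter space, and the budgets) is hypothetical.

```python
import random

def run_target(config, instance):
    """Stand-in for running the target algorithm: returns a runtime-like
    cost (lower is better). A real setup would execute the solver."""
    x, y = config
    random.seed(hash((x, y, instance)) % 2**32)
    return (x - 0.3) ** 2 + (y - 0.7) ** 2 + random.uniform(0, 0.1)

def successive_halving(n_configs=32, instances_per_round=4):
    # Sample candidate configurations uniformly from the parameter space.
    configs = [(random.random(), random.random()) for _ in range(n_configs)]
    budget = instances_per_round
    while len(configs) > 1:
        # Evaluate each surviving configuration on a batch of instances.
        scores = [sum(run_target(c, i) for i in range(budget)) / budget
                  for c in configs]
        # Keep the better half; double the evaluation budget.
        ranked = sorted(zip(scores, configs))
        configs = [c for _, c in ranked[: max(1, len(configs) // 2)]]
        budget *= 2
    return configs[0]

random.seed(0)
print("selected configuration:", successive_halving())
```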
Visual Question Answering (VQA) models often perform poorly on out-of-distribution data and struggle with domain generalization. Due to the multi-modal nature of the task, multiple factors of variation are intertwined, making generalization difficult to analyze. This motivates us to introduce a virtual benchmark, Super-CLEVR, in which different factors in VQA domain shifts can be isolated so that their effects can be studied independently. Four factors are considered: visual complexity, question redundancy, concept distribution, and concept compositionality. With controllably generated data, Super-CLEVR enables us to test VQA methods in situations where the test data differ from the training data along each of these axes. We study four existing methods, two neural-symbolic methods (NSCL and NSVQA) and two non-symbolic methods (FiLM and mDETR), as well as our proposed method, probabilistic NSVQA (P-NSVQA), which extends NSVQA with uncertainty reasoning. P-NSVQA outperforms the other methods on three of the four domain-shift factors. Our results suggest that disentangling reasoning from perception, combined with probabilistic uncertainty, yields a strong VQA model that is more robust to domain shifts. The dataset and code are released at https://github.com/Lizw14/Super-CLEVR.
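One way to picture the uncertainty reasoning that P-NSVQA adds is to keep perception outputs as probabilities rather than hard labels and aggregate them while executing a symbolic program. The sketch below answers a "how many objects are red?" query by convolving per-object Bernoulli probabilities into a distribution over counts; it illustrates the general idea, not the paper's method.

```python
import numpy as np

# Hypothetical perception output: for each detected object, a probability
# that its color is "red" (instead of a hard argmax label).
p_red = np.array([0.95, 0.60, 0.10, 0.85])

def count_distribution(probs):
    """Distribution over 'how many objects are red', treating detections
    as independent Bernoulli variables (Poisson-binomial by convolution)."""
    dist = np.array([1.0])
    for p in probs:
        dist = np.convolve(dist, [1 - p, p])
    return dist

dist = count_distribution(p_red)
print("most probable count:", int(np.argmax(dist)))
print("P(count = k):", np.round(dist, 3))
```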
Dimensionality reduction has become an important research topic as demand for interpreting high-dimensional datasets has grown rapidly in recent years. Many dimensionality-reduction methods perform well at preserving the overall relationships among data points when mapping them to a lower-dimensional space. However, these existing methods fail to incorporate differences in importance among features. To address this problem, we propose a novel meta-method, DimenFix, which can be applied on top of any base dimensionality-reduction method that involves a gradient-descent-like process. By allowing users to define the importance of different features, which is then taken into account during dimensionality reduction, DimenFix creates new possibilities for visualizing and understanding a given dataset. Meanwhile, DimenFix neither increases the time cost nor reduces the quality of dimensionality reduction relative to the base method used.
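One plausible reading of the mechanism (labeled as an assumption, since the abstract does not spell it out) is that after each gradient step of the base method, one embedding axis is blended toward a user-chosen important feature in proportion to its importance:

```python
import numpy as np

def dimenfix_step(emb, grad, feature, importance, lr=0.1):
    """One gradient-descent-like update with a DimenFix-style constraint.

    emb:        (n, 2) current low-dimensional embedding
    grad:       (n, 2) gradient from the base DR method (e.g., t-SNE-like)
    feature:    (n,) values of the user-chosen important feature
    importance: in [0, 1]; how strongly axis 1 is pulled toward the feature
    """
    emb = emb - lr * grad                       # ordinary descent step
    target = (feature - feature.mean()) / (feature.std() + 1e-9)
    # Blend one embedding axis toward the (standardized) feature values.
    emb[:, 1] = (1 - importance) * emb[:, 1] + importance * target
    return emb

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 2))
grad = rng.normal(scale=0.1, size=(100, 2))
feature = rng.uniform(size=100)
emb = dimenfix_step(emb, grad, feature, importance=0.8)
```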
Besides accuracy, recent studies on machine learning models have been addressing the question of how their results can be interpreted. Indeed, while complex machine learning models can provide very accurate results even in challenging applications, they are difficult to interpret. Aiming to provide some interpretability for such models, one of the most famous methods, SHAP, borrows the Shapley value concept from game theory to locally explain the predicted outcome of an instance of interest. Since calculating SHAP values requires evaluating all possible coalitions of attributes, the computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates these values with less computational effort. In this paper, we also address local interpretability in machine learning based on Shapley values. First, we provide a straightforward formulation of a SHAP-based method for local interpretability using the Choquet integral, which leads to both Shapley values and Shapley interaction indices. Moreover, we adopt the concept of $k$-additive games from game theory, which helps reduce the computational effort when estimating SHAP values. The obtained results attest that our proposal requires fewer computations on coalitions of attributes to approximate the SHAP values.
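The coalition-sampling idea behind such approximations can be illustrated with the standard permutation-sampling Shapley estimator below; the paper's $k$-additive formulation reduces the number of coalition evaluations further, which this sketch does not implement.

```python
import numpy as np

def shapley_sampling(value_fn, n_features, n_perms=200, seed=0):
    """Monte Carlo estimate of Shapley values by sampling permutations.

    value_fn(S): payoff of a coalition S (a frozenset of feature indices),
    e.g., the model's prediction with only the features in S 'present'.
    """
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_features)
    for _ in range(n_perms):
        perm = rng.permutation(n_features)
        coalition = set()
        prev = value_fn(frozenset(coalition))
        for j in perm:
            coalition.add(j)
            cur = value_fn(frozenset(coalition))
            phi[j] += cur - prev      # marginal contribution of feature j
            prev = cur
    return phi / n_perms

# Toy 3-feature game: v(S) = sum of feature weights in S (additive game,
# so the Shapley values recover the weights exactly).
w = np.array([0.5, 1.5, -1.0])
v = lambda S: sum(w[i] for i in S)
print(np.round(shapley_sampling(v, 3), 2))   # ~ [0.5, 1.5, -1.0]
```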
In this paper, we study the application of DRL algorithms to the problem of local navigation, in which a robot equipped only with limited-range exteroceptive sensors (e.g., LIDAR) moves toward a goal position in unknown and cluttered workspaces. DRL-based collision-avoidance policies have some advantages, but they are highly susceptible to local minima, since their ability to learn suitable actions is limited to the sensor range. Because most robots perform tasks in unstructured environments, generalized local navigation policies that can avoid local minima, especially in environments not seen during training, are of great interest. To this end, we propose a novel reward function that incorporates map information obtained during the training phase, improving the agent's ability to deliberate over the best course of action. In addition, we use the SAC algorithm to train our ANN, which has been shown in the state-of-the-art literature to be more effective than alternatives. A set of sim-to-sim and sim-to-real experiments shows that our proposed reward combined with SAC outperforms the compared approaches at escaping local minima and avoiding collisions.
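A hypothetical shape for such a map-informed reward is sketched below: progress is measured with a geodesic (map-aware) distance to the goal rather than the straight-line one, so actions that lead into dead ends are not rewarded. The terms and weights are illustrative assumptions, not the paper's exact function.

```python
def reward(d_geo_prev, d_geo, collided, reached,
           w_progress=1.0, r_goal=100.0, r_collision=-100.0, step_cost=-0.05):
    """Hypothetical shaped reward for local navigation.

    d_geo_prev / d_geo: geodesic (map-aware) distance to the goal before
    and after the action. Using the map-based distance instead of the
    Euclidean one penalizes moves that look good in straight-line terms
    but lead into a dead end (a local minimum).
    """
    if reached:
        return r_goal
    if collided:
        return r_collision
    return w_progress * (d_geo_prev - d_geo) + step_cost

# Example: the agent cut 0.4 m off the map distance this step.
print(reward(d_geo_prev=5.0, d_geo=4.6, collided=False, reached=False))
```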
Language evolves over time, and word meanings change accordingly. This is especially true in social media, whose dynamic nature leads to faster semantic shifts and makes it challenging for NLP models to deal with new content and trends. However, datasets and models that specifically address the dynamic nature of these social platforms are scarce. To bridge this gap, we present TempoWiC, a new benchmark aimed specifically at accelerating research on meaning shift in social media. Our results show that TempoWiC is a challenging benchmark, even for recently released language models specialized in social media.
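A common baseline for this kind of meaning-shift (word-in-context) task compares contextual embeddings of the same target word in two messages; the sketch below does this with a generic BERT model and an arbitrary similarity threshold, neither of which is TempoWiC's actual baseline.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_embedding(sentence, word):
    """Mean of the contextual vectors of the target word's subtokens."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, dim)
    target_ids = tok(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError("target word not found in sentence")

e1 = word_embedding("the new apple phone launched today", "apple")
e2 = word_embedding("she picked a ripe apple from the tree", "apple")
sim = torch.cosine_similarity(e1, e2, dim=0).item()
# Threshold of 0.5 is arbitrary, for illustration only.
print("same meaning" if sim > 0.5 else "meaning shifted", round(sim, 3))
```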